Gesture Generation from Trimodal Context for Humanoid Robots

Tang, Shiyi, Dondrup, Christian

arXiv.org Artificial Intelligence

Natural co-speech gestures are essential to improving the experience of human-robot interaction (HRI). However, current gesture generation approaches have many limitations: the gestures are not natural, do not align with the speech and its content, or lack diverse speaker styles. Therefore, this work aims to reproduce the work by Yoon et al., generating natural gestures in simulation based on tri-modal inputs, and to apply this to a robot. During evaluation, "motion variance" and "Frechet Gesture Distance (FGD)" are employed to evaluate the performance objectively. Then, human participants were recruited to subjectively evaluate the gestures. Results show that the movements in that paper have been successfully transferred to the robot, and that the gestures have diverse styles and are correlated with the speech. Moreover, there is a significant difference in likeability and style between different gestures.


Can Humans Do Less-Than-One-Shot Learning?

Malaviya, Maya, Sucholutsky, Ilia, Oktar, Kerem, Griffiths, Thomas L.

arXiv.org Artificial Intelligence

Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly how small? In this paper, we introduce a novel experimental paradigm that allows us to examine classification in an extremely data-scarce setting, asking whether humans can learn more categories than they have exemplars (i.e., can humans do "less-than-one shot" learning?). An experiment conducted using this paradigm reveals that people are capable of learning in such settings, and provides several insights into underlying mechanisms. First, people can accurately infer and represent high-dimensional feature spaces from very little data. Second, having inferred the relevant spaces, people use a form of prototype-based categorization (as opposed to exemplar-based) to make categorical inferences. Finally, systematic, machine-learnable patterns in responses indicate that people may have efficient inductive biases for dealing with this class of data-scarce problems.


Seeing Through Walls

Communications of the ACM

Machine vision coupled with artificial intelligence (AI) has made great strides toward letting computers understand images. Thanks to deep learning, which processes information in a way analogous to the human brain, machine vision is doing everything from keeping self-driving cars on the right track to improving cancer diagnosis by examining biopsy slides or x-ray images. Now some researchers are going beyond what the human eye or a camera lens can see, using machine learning to watch what people are doing on the other side of a wall. The technique relies on low-power radio frequency (RF) signals, which reflect off living tissue and metal but pass easily through wooden or plaster interior walls. AI can decipher those signals, not only to detect the presence of people, but also to see how they are moving, and even to predict the activity they are engaged in, from talking on a phone to brushing their teeth.


Artificial intelligence senses people through walls

#artificialintelligence

X-ray vision has long seemed like a far-fetched sci-fi fantasy, but over the last decade a team led by Professor Dina Katabi from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has continually gotten us closer to seeing through walls. Their latest project, "RF-Pose," uses artificial intelligence (AI) to teach wireless devices to sense people's postures and movement, even from the other side of a wall. The researchers use a neural network to analyze radio signals that bounce off people's bodies, and can then create a dynamic stick figure that walks, stops, sits, and moves its limbs as the person performs those actions. The team says that RF-Pose could be used to monitor diseases like Parkinson's, multiple sclerosis (MS), and muscular dystrophy, providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns.


Deepfakes for dancing: you can now use AI to fake those dance moves you always wanted

#artificialintelligence

Artificial intelligence is proving to be a very capable tool when it comes to manipulating videos of people. Face-swapping deepfakes have been the most visible example, but new applications are being found every day. Call it deepfakes for dancing -- it uses AI to read someone's dance moves and copy them onto a target body. The actual science here was done by a quartet of researchers from UC Berkeley. As they describe in a paper posted on arXiv, their system consists of a number of discrete steps.


New technology can see your body through walls

#artificialintelligence

MIT's Computer Science and Artificial Intelligence Laboratory has created a system that can see your body through walls, recreating your poses when you walk, sit, or simply stand still. It uses RF waves to sense where you are and then recreates your body as a simple stick figure. The researchers use a neural network to analyze radio signals that bounce off people's bodies, and can then create a dynamic stick figure that walks, stops, sits and moves its limbs as the person performs those actions. The team says that the system could be used to monitor diseases like Parkinson's and multiple sclerosis (MS), providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns.


X-ray vision will soon give soldiers the ability to see through walls

FOX News

File photo: South Korean and U.S. Marines take part in a winter military drill in Pyeongchang, South Korea, December 19, 2017. Bionic soldiers with X-ray vision could soon be a reality thanks to a new wireless system that uses radio waves to map people's movements behind walls. Researchers at MIT trained artificial intelligence to analyze radio signals that bounce off human bodies to create a dynamic stick figure that mimics a person's actions. The neural network can sense people's postures and movement even from outside a building or room. MIT says the tech can be embedded into a wireless device, which would theoretically allow soldiers to hook it up to their combat gear -- like helmets and night-vision goggles.


AI built to track you through walls because, er, Parkinson's?

#artificialintelligence

AI systems can track the movements of people hidden behind walls by inspecting radio waves reflected off their bodies, according to a new study. The model, dubbed "RF-Pose," starts off by transmitting low-power radio signals that can penetrate walls using a Wi-Fi-like wireless device. These waves bounce back when reflected off a human body, creating heatmaps, and these are then processed by a neural network to build a two-dimensional stick figure representing the person behind the wall. During training, RF-Pose learns from both the heatmap images created by the device and images taken from cameras. The researchers from the Massachusetts Institute of Technology (MIT) collected more than 50 hours of footage of people in 50 different environments, including people walking along corridors or engaging in a lesson in a classroom.
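The heatmap-to-stick-figure step described above can be sketched in a few lines. This is an illustrative toy, not the MIT model: the network is a deep CNN in the actual system, so a single random linear layer stands in for it here, and the joint count, heatmap resolution, and function names are all assumptions made for the sketch. The point is the data flow: two RF heatmaps in, one confidence map per joint out, and the 2D keypoint read off as the argmax of each map.

```python
import numpy as np

N_JOINTS = 14   # e.g. head, shoulders, elbows, wrists, hips, knees, feet
H, W = 16, 16   # toy heatmap resolution

rng = np.random.default_rng(0)

def predict_keypoints(vertical_hm, horizontal_hm, weights):
    """Map a pair of RF heatmaps to one (x, y) pixel coordinate per joint."""
    # Flatten and concatenate the vertical and horizontal projections
    # of reflected signal power, as the pipeline pairs both views.
    x = np.concatenate([vertical_hm.ravel(), horizontal_hm.ravel()])
    logits = weights @ x                         # stand-in for the CNN
    conf_maps = logits.reshape(N_JOINTS, H, W)   # one confidence map per joint
    # Each joint's 2D location is the peak of its confidence map.
    flat_idx = conf_maps.reshape(N_JOINTS, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (H, W))
    return np.stack([xs, ys], axis=1)            # shape (N_JOINTS, 2)

# Fake inputs: two heatmaps of signal power reflected from behind a wall.
vert = rng.random((H, W))
horiz = rng.random((H, W))
weights = rng.standard_normal((N_JOINTS * H * W, 2 * H * W)) * 0.01

keypoints = predict_keypoints(vert, horiz, weights)
print(keypoints.shape)  # one (x, y) per joint of the stick figure
```

During training, the camera images mentioned above supply the ground-truth keypoints, so the camera-based pose estimator effectively acts as the teacher for the RF branch.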


AI senses people's pose through walls

#artificialintelligence

X-ray vision has long seemed like a far-fetched sci-fi fantasy, but over the last decade a team led by Professor Dina Katabi from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has continually gotten us closer to seeing through walls. Their latest project, "RF-Pose," uses artificial intelligence (AI) to teach wireless devices to sense people's postures and movement, even from the other side of a wall. The researchers use a neural network to analyze radio signals that bounce off people's bodies, and can then create a dynamic stick figure that walks, stops, sits and moves its limbs as the person performs those actions. The team says that the system could be used to monitor diseases like Parkinson's and multiple sclerosis (MS), providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns.


This AI can see people through walls. Here's how.

#artificialintelligence

Radio signals coupled with artificial intelligence have allowed researchers to do something fascinating: see skeleton-like representations of people moving on the other side of a wall. And while it sounds like the kind of technology a SWAT team would love to have before kicking through a door, it's already been used in a surprising way--to monitor the movements of Parkinson's patients in their homes. Interest in this type of technology dates back decades, says Dina Katabi, the senior researcher on the project and a professor of electrical engineering and computer science at MIT. "There was a big project by DARPA to try to detect people through walls and use wireless signals," she says. But before this most recent research, the best these systems could do was reveal a "blob" shape of a person behind a wall. The technology is now capable of revealing something more precise: it depicts the people in the scene as skeleton-like stick figures, and can show them moving in real time as they do normal activities, like walk or sit down. It focuses on key points of the body, including joints like elbows, hips, and feet.